    Land Use And Land Cover Classification And Change Detection Using Naip Imagery From 2009 To 2014: Table Rock Lake Region, Missouri

    Land use and land cover (LULC) of the Table Rock Lake (TRL) region has changed over the last half century since the construction of Table Rock Dam in 1959. This study uses one-meter spatial resolution imagery to classify and detect LULC change in three typical waterside TRL regions. The main objectives are to provide an efficient and reliable classification workflow for regional-level NAIP aerial imagery and to identify the dynamic patterns of the study areas. Seven class types are extracted from the optimal classification results for 2009, 2010, 2012, and 2014 at Table Rock Village, Kimberling City, and Indian Point. Pixel-based post-classification comparison generated “from-to” confusion matrices showing the detailed change patterns. I conclude that the object-based random trees classifier achieves the highest overall accuracy and kappa value, compared with the other six classification approaches, and is efficient for producing LULC classification maps. The major change pattern is that vegetation, including trees and grass, increased over the five-year period, while residential extension and urbanization were not pronounced enough to indicate high economic development in the TRL region. By adding auxiliary spatial information and object-based post-classification techniques, an improved classification procedure can be applied to LULC change detection projects at the regional level.
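The pixel-based post-classification comparison above tallies, for each pair of classified maps, how many pixels moved from one class to another. A minimal sketch of such a “from-to” matrix, using placeholder class names rather than the study's exact seven classes:

```python
from collections import Counter

# Hypothetical class labels; the study's seven classes are not all listed here.
CLASSES = ["water", "trees", "grass", "impervious"]

def from_to_matrix(map_t1, map_t2, classes=CLASSES):
    """Count pixel transitions between two classified maps of equal size.

    Returns a nested dict: matrix[from_class][to_class] -> pixel count.
    The diagonal holds unchanged pixels; off-diagonal cells are changes.
    """
    assert len(map_t1) == len(map_t2), "maps must align pixel-for-pixel"
    counts = Counter(zip(map_t1, map_t2))
    return {f: {t: counts.get((f, t), 0) for t in classes} for f in classes}

# Toy 2x3 scene flattened to 1-D: grass converting to trees over time.
m2009 = ["water", "grass", "grass", "trees", "impervious", "grass"]
m2014 = ["water", "trees", "grass", "trees", "impervious", "trees"]
matrix = from_to_matrix(m2009, m2014)
print(matrix["grass"]["trees"])  # 2 pixels changed from grass to trees
```

Summing the diagonal gives the unchanged area; everything off-diagonal is the change the study maps.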

    Taking the pulse of COVID-19: A spatiotemporal perspective

    The sudden outbreak of coronavirus disease (COVID-19) swept across the world in early 2020, triggering lockdowns of several billion people across many countries, including China, Spain, India, the U.K., Italy, France, Germany, and most states of the U.S. Transmission of the virus accelerated rapidly, with the most confirmed cases in the U.S., and New York City became an epicenter of the pandemic by the end of March. In response to this national and global emergency, the NSF Spatiotemporal Innovation Center brought together a taskforce of international researchers and assembled and implemented strategies to respond rapidly to this crisis, supporting research, saving lives, and protecting the health of global citizens. This perspective paper presents our collective view of the global health emergency and our efforts in collecting, analyzing, and sharing relevant data on global policy and government responses, geospatial indicators of the outbreak, and evolving forecasts; in developing research capabilities and mitigation measures with global scientists; in promoting collaborative research on outbreak dynamics; and in reflecting on the dynamic responses from human societies. (27 pages, 18 figures; published in the International Journal of Digital Earth, 2020.)

    Optimal methodology for detecting land cover change in a forestry, lakeside environment using NAIP imagery

    Mapping land cover change is useful for various environmental and urban planning applications, e.g. land management, forest conservation, ecological assessment, transportation planning, and impervious surface control. As the optimal change detection approaches, algorithms, and parameters often depend on the phenomenon of interest and the remote sensing imagery used, the goal of this study is to find the optimal procedure for detecting urban growth in rural, forested areas using one-meter, four-band NAIP images. Focusing on different types of impervious cover, the authors test the optimal segmentation parameters for object-based image analysis and conclude that the random tree classifier, among the six classifiers compared, is best suited for land use/cover change detection analysis, with a satisfying overall accuracy of 87.7%. With continuous free coverage of NAIP images, the optimal change detection procedure concluded in this study is valuable for future analyses of urban growth in rural, forested environments.
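The overall accuracy and kappa values used to rank classifiers in these studies can both be derived from an accuracy-assessment confusion matrix. A small sketch with made-up counts (not the study's data):

```python
def overall_accuracy_and_kappa(cm):
    """Compute overall accuracy and Cohen's kappa from a square
    confusion matrix given as a list of rows (reference x predicted)."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n          # observed agreement
    row_totals = [sum(row) for row in cm]
    col_totals = [sum(col) for col in zip(*cm)]
    pe = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)  # chance agreement
    return po, (po - pe) / (1 - pe)

# Toy 3-class accuracy assessment (illustrative counts only).
cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(round(acc, 3), round(kappa, 3))  # 0.9 0.849
```

A kappa well above the overall accuracy's chance-agreement baseline is what distinguishes the better classifiers in such comparisons.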

    An Open-Source Workflow for Spatiotemporal Studies with COVID-19 as an Example

    Many previous studies have shown that open-source technologies help democratize information and foster collaborations to address global physical and societal challenges. The outbreak of the novel coronavirus has imposed unprecedented challenges on human society. It affects every aspect of livelihood, including health, environment, transportation, and the economy. Open-source technologies provide a new ray of hope to collaboratively tackle the pandemic. The role of open source is not limited to sharing source code. Rather, open-source projects can be adopted as a software development approach to encourage collaboration among researchers. Open collaboration creates a positive impact on society and helps combat the pandemic effectively. Open-source technology integrated with geospatial information allows decision-makers to make strategic and informed decisions. It also assists them in determining the type of intervention needed based on geospatial information. The novelty of this paper is to standardize the open-source workflow for spatiotemporal research. The highlights of the open-source workflow include sharing data, analytical tools, spatiotemporal applications, and results, and formalizing open-source software development. The workflow includes (i) developing open-source spatiotemporal applications, (ii) opening and sharing the spatiotemporal resources, and (iii) replicating the research in a plug-and-play fashion. Open data, open analytical tools and source code, and publicly accessible results form the foundation of this workflow. This paper also presents a case study with open-source spatiotemporal application development for air quality analysis in California, USA. In addition to the application development, we shared the spatiotemporal data, source code, and research findings through the GitHub repository.

    Spatiotemporal analysis of medical resource deficiencies in the U.S. under COVID-19 pandemic.

    Coronavirus disease 2019 (COVID-19) was first identified in December 2019 in Wuhan, China, and has quickly resulted in an ongoing pandemic. A data-driven approach was developed to estimate medical resource deficiencies due to medical burdens at the county level during the COVID-19 pandemic. The study period ran mainly from February 15, 2020 to May 1, 2020 in the U.S. Multiple data sources were used to extract local population, hospital beds, critical care staff, COVID-19 confirmed case numbers, and hospitalization data at the county level. We estimated the average length of stay from hospitalization data at the state level and calculated the hospitalization rate at both the state and county levels. We then developed two medical resource deficiency indices that measure the local medical burden based on, respectively, the number of accumulated active confirmed cases normalized by local maximum potential medical resources, and the number of hospitalized patients that can be supported per ICU bed per critical care staff member. Data on medical resources and the two deficiency indices are illustrated in a dynamic spatiotemporal visualization platform based on ArcGIS Pro Dashboards. Our results provide new insights into U.S. pandemic preparedness and local dynamics relating to medical burdens in response to the COVID-19 pandemic.
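As a hedged sketch of how such county-level indices might be computed (the paper's exact formulas, normalizations, and variable definitions may differ), assume active case counts are normalized by bed capacity and hospitalized patients are related to ICU beds and critical care staff:

```python
def deficiency_indices(active_cases, hosp_rate, beds, icu_beds, staff):
    """Two illustrative medical-resource deficiency indices for one county.

    index_1: active confirmed cases normalized by local bed capacity
             (a stand-in for 'maximum potential medical resources').
    index_2: estimated hospitalized patients per ICU bed per
             critical-care staff member.
    All inputs and both formulas are illustrative assumptions.
    """
    hospitalized = active_cases * hosp_rate   # hosp_rate estimated per state
    index_1 = active_cases / beds
    index_2 = hospitalized / (icu_beds * staff)
    return index_1, index_2

# Hypothetical county: 900 active cases, 20% hospitalization rate,
# 300 beds, 30 ICU beds, 12 critical-care staff.
i1, i2 = deficiency_indices(active_cases=900, hosp_rate=0.2,
                            beds=300, icu_beds=30, staff=12)
print(round(i1, 2), round(i2, 2))  # 3.0 0.5
```

Values above 1 for an index like `index_1` would flag counties whose caseload exceeds local capacity, which is the kind of signal the dashboard visualizes over time.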

    Big Earth data analytics: a survey

    Big Earth data are produced from satellite observations, the Internet of Things, model simulations, and other sources. The data embed unprecedented insights and spatiotemporal stamps of relevant Earth phenomena for improving our understanding of, responses to, and solutions for challenges in Earth sciences and applications. In the past years, new technologies (such as cloud computing, big data, and artificial intelligence) have gained momentum in addressing the challenges of using big Earth data for scientific studies and geospatial applications that were historically intractable. This paper reviews big Earth data analytics from several aspects to capture the latest advancements in this fast-growing domain. We first introduce the concepts of big Earth data. The architecture, various functionalities, and supporting modules are then reviewed from a generic methodological aspect. Analytical methods supporting the functionalities are surveyed and analyzed in the context of different tools. The driving questions are exemplified through cutting-edge Earth science research and applications. A list of challenges and opportunities is proposed for different stakeholders to collaboratively advance big Earth data analytics in the near future.

    PreciPatch: A Dictionary-based Precipitation Downscaling Method

    Climate and weather data such as precipitation derived from Global Climate Models (GCMs) and satellite observations are essential for global and local hydrological assessment. However, most popular precipitation products (with spatial resolutions coarser than 10 km) are too coarse for local impact studies and require “downscaling” to obtain higher resolutions. Traditional precipitation downscaling methods, such as statistical and dynamic downscaling, require additional meteorological variables as input, and very few are applicable for downscaling hourly precipitation to higher spatial resolution. Based on dynamic dictionary learning, we propose a new downscaling method, PreciPatch, to address this challenge by producing spatially distributed higher-resolution precipitation fields with only precipitation input from GCMs, at hourly temporal resolution and over a large geographical extent. Using aggregated Integrated Multi-satellitE Retrievals for GPM (IMERG) data, an experiment was conducted to evaluate the performance of PreciPatch in comparison with bicubic interpolation, RainFARM (a stochastic downscaling method), and DeepSD (a Super-Resolution Convolutional Neural Network (SRCNN) based downscaling method). PreciPatch demonstrates better performance than the other methods for downscaling short-duration precipitation events (using historical data from 2014 to 2017 as the training set to estimate high-resolution hourly events in 2018).
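The dictionary-based idea can be illustrated by matching a coarse precipitation patch against a small hypothetical dictionary of paired coarse/fine patterns and returning the fine pattern of the closest match. This is a conceptual sketch of dictionary-based downscaling in general, not PreciPatch's actual algorithm; the patch sizes, values, and distance metric are all assumptions:

```python
def downscale_patch(coarse_patch, dictionary):
    """Return the fine pattern paired with the dictionary entry whose
    coarse pattern is closest to the input (sum of squared differences)."""
    def ssd(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, fine = min(dictionary, key=lambda pair: ssd(pair[0], coarse_patch))
    return fine

# Hypothetical dictionary: 2x2 coarse patches paired with 4x4 fine patches
# (both flattened). Values stand in for hourly rain rates (mm/h).
dictionary = [
    ([0.0, 0.0, 0.0, 0.0], [0.0] * 16),               # dry pattern
    ([2.0, 2.0, 2.0, 2.0], [2.0] * 16),               # uniform light rain
    ([5.0, 0.0, 0.0, 0.0], [8.0, 2.0] + [0.0] * 14),  # localized cell
]
fine = downscale_patch([4.5, 0.2, 0.1, 0.0], dictionary)
print(fine[0])  # 8.0: the localized-cell fine pattern was matched
```

Unlike plain bicubic interpolation, which smooths the field, the matched fine pattern can reintroduce sharp local structure learned from historical high-resolution data.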

    COVID-Scraper: An Open-Source Toolset for Automatically Scraping and Processing Global Multi-Scale Spatiotemporal COVID-19 Records

    Beginning in late 2019, COVID-19 quickly spread across the world, infecting hundreds of millions of people and disrupting the normal lives of citizens in every country. Governments, organizations, and research institutions all over the world are dedicating vast resources to researching effective strategies to fight this rapidly propagating virus. With virus testing, most countries routinely publish the numbers of confirmed, dead, and recovered cases, along with locations, through various channels and forms. This important data source has enabled researchers worldwide to perform different COVID-19 scientific studies, such as modeling the virus's spreading patterns, developing prevention strategies, and studying the impact of COVID-19 on other aspects of society. However, one major challenge is that there is no standardized, continuously updated, high-quality data product that covers COVID-19 case data internationally. Different countries may publish their data in unique channels, formats, and time intervals, which hinders researchers from fetching the necessary COVID-19 datasets effectively, especially for fine-scale studies. Although existing solutions such as the Johns Hopkins COVID-19 Dashboard and the 1point3acres COVID-19 tracker are widely used, it is difficult for users to access their original datasets and customize the data to meet specific requirements in categories, data structure, and data source selection. To address this challenge, we developed a toolset using cloud-based web scraping to automatically extract, refine, unify, and store COVID-19 case data at multiple scales for all available countries around the world. The toolset then publishes the data for public access, offering users a real-time, dynamic COVID-19 dataset with a global view. Two case studies are presented on how to utilize the datasets. The toolset can also be easily extended for other purposes thanks to its open-source nature.
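The unification step such a scraper performs can be sketched as mapping each source's column names onto one shared schema and casting the counts. The field names and sample feed below are hypothetical, not the toolset's actual data sources:

```python
import csv
import io

# Hypothetical raw feed: a country publishing cases under its own column
# names; the scraper maps these onto one unified schema.
RAW = """region,dt,confirmed,deaths,recovered
CountryA,2020-04-01,120,3,40
CountryA,2020-04-02,150,4,55
"""

# Assumed mapping from this source's columns to the unified schema.
FIELD_MAP = {"region": "location", "dt": "date", "confirmed": "confirmed",
             "deaths": "dead", "recovered": "recovered"}

def unify(raw_csv, field_map):
    """Rename source-specific columns to the unified schema and cast counts."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        row = {field_map[k]: v for k, v in rec.items()}
        for key in ("confirmed", "dead", "recovered"):
            row[key] = int(row[key])
        rows.append(row)
    return rows

records = unify(RAW, FIELD_MAP)
print(records[1]["confirmed"] - records[0]["confirmed"])  # 30 new cases
```

With one `FIELD_MAP` per source, records from every country land in the same structure, which is what makes the published dataset queryable at multiple scales.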